Radio Foundation Models: Pre-training Transformers for 5G-based Indoor Localization

Ott, Jonathan, Pirkl, Jonas, Stahlke, Maximilian, Feigl, Tobias, Mutschler, Christopher

arXiv.org Artificial Intelligence

Artificial Intelligence (AI)-based radio fingerprinting (FP) outperforms classic localization methods in propagation environments with strong multipath effects. However, the model and data orchestration of FP are time-consuming and costly, as FP requires many reference positions and extensive measurement campaigns for each environment. Modern unsupervised and self-supervised learning schemes require less reference data for localization, but either their accuracy is low or they require additional sensor information, rendering them impractical. In this paper, we propose a self-supervised learning framework that pre-trains a general transformer (TF) neural network on 5G channel measurements that we collect on-the-fly without expensive equipment. Our novel pretext task randomly masks and drops input information, training the model to reconstruct it. In doing so, the model implicitly learns the spatiotemporal patterns of the propagation environment that enable FP-based localization. Notably, when we fine-tune this pre-trained model for localization in a given environment, it matches the accuracy of state-of-the-art methods while requiring ten times less reference data and significantly reducing the time from training to operation.
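The masked-reconstruction pretext task described above can be sketched in a few lines. This is a minimal, hypothetical illustration (the function names, mask ratio, and NumPy setup are assumptions, not the paper's implementation): a fraction of the input tokens is blanked out, and the training loss is computed only over the positions the model must reconstruct.

```python
import numpy as np

def mask_tokens(x, mask_ratio=0.3, mask_value=0.0, seed=0):
    """Randomly blank a fraction of input tokens -- a simplified,
    hypothetical stand-in for the paper's mask-and-drop corruption."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    mask = rng.random(x.shape[0]) < mask_ratio   # positions to reconstruct
    corrupted = x.copy()
    corrupted[mask] = mask_value                 # masked tokens are blanked
    return corrupted, mask

def reconstruction_loss(pred, target, mask):
    """Mean squared error computed only over the masked positions,
    as in BERT-style masked-reconstruction objectives."""
    return float(np.mean((pred[mask] - target[mask]) ** 2))
```

During pre-training, the transformer would receive `corrupted` and be optimized to minimize `reconstruction_loss` against the original measurements; fine-tuning then replaces the reconstruction head with a position-regression head.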


Creative Beam Search: LLM-as-a-Judge For Improving Response Generation

Franceschelli, Giorgio, Musolesi, Mirco

arXiv.org Artificial Intelligence

Large language models are revolutionizing several areas, including artificial creativity. However, the process of generation in machines diverges profoundly from that observed in humans. In particular, machine generation is characterized by a lack of intentionality and of an underlying creative process. We propose a method called Creative Beam Search that uses Diverse Beam Search and LLM-as-a-Judge to perform response generation and response validation. The results of a qualitative experiment show that our approach can provide better output than standard sampling techniques. We also show that the response validation step is a necessary complement to the response generation step.
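The two-stage pipeline described in the abstract (diverse generation followed by judge-based validation) can be sketched abstractly. All names here are illustrative assumptions: `generate` stands in for a diverse beam search sampler and `judge` for an LLM-as-a-Judge scorer; neither reflects the authors' actual implementation.

```python
def creative_beam_search(generate, judge, n_candidates=4):
    """Hypothetical sketch of a generate-then-validate pipeline:
    a generation step produces several diverse candidate responses,
    then a validation step scores each candidate with a judge and
    returns the best-scoring response."""
    candidates = generate(n_candidates)       # e.g. diverse beam search
    scores = [judge(c) for c in candidates]   # e.g. LLM-as-a-Judge rating
    best = max(range(len(candidates)), key=scores.__getitem__)
    return candidates[best], scores[best]

# Stub generator and judge standing in for actual LLM calls:
sample_responses = ["ok", "a vivid, surprising reply", "short answer"]
pick, rating = creative_beam_search(lambda n: sample_responses[:n],
                                    judge=len, n_candidates=3)
```

Keeping validation as a separate step, as the paper argues, lets the judge discard fluent but unoriginal candidates that the sampler alone would rank highly.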


Artificial General Intelligence: 14th International Conference, AGI 2021, Palo Alto, CA, USA, October 15–18, 2021, Proceedings (Lecture Notes in Computer Science)

Goertzel, Ben, Iklé, Matthew, Potapov, Alexey

#artificialintelligence

The 36 full papers presented in this book were carefully reviewed and selected from 50 submissions. The papers cover topics from foundations of AGI, to AGI approaches and AGI ethics, to the roles of systems biology, goal generation, and learning systems, and so much more.


MLDM 2018 : 14th International Conference on Machine Learning and Data Mining MLDM 2018

@machinelearnbot

The aim of the conference is to bring together researchers from all over the world who deal with machine learning and data mining, in order to discuss the current status of the research and to direct further developments. Both basic research papers and application papers are welcome. Paper submissions should be related, but are not limited, to the following topics: association rules; case-based reasoning and learning; classification and interpretation of images, text, and video; conceptual learning and clustering; goodness measures and evaluation. Long papers must be formatted in the Springer LNCS format and should have at most 15 pages.


Cerebella: Automatic Generation of Nonverbal Behavior for Virtual Humans

Lhommet, Margot (Northeastern University) | Xu, Yuyu (Northeastern University) | Marsella, Stacy (Northeastern University)

AAAI Conferences

Our method automatically generates realistic nonverbal performances for virtual characters to accompany spoken utterances. It analyses the acoustic, syntactic, semantic, and rhetorical properties of the utterance text and audio signal to generate nonverbal behavior such as head movements, eye saccades, and novel gesture animations based on co-articulation.